Facial Expression Synthesis Based on Emotion Dimensions for Affective Talking Avatar
Authors
Abstract
Facial expression is one of the most expressive ways for human beings to deliver emotion, intention, and other nonverbal messages in face-to-face communication. In this chapter, a layered parametric framework is proposed to synthesize emotional facial expressions for an MPEG-4 compliant talking avatar based on the three-dimensional PAD model, whose dimensions are pleasure-displeasure, arousal-nonarousal, and dominance-submissiveness. The PAD dimensions capture the high-level emotional state of the talking avatar with a specific facial expression. A set of partial expression parameters (PEPs) is designed to depict expressive facial motion patterns in local face areas and to reduce the complexity of directly manipulating the low-level MPEG-4 facial animation parameters (FAPs). The relationships among the emotion (PAD), expression (PEP), and animation (FAP) parameters are analyzed on a virtual facial expression database. Two levels of parameter mapping are implemented: the emotion-expression mapping from PAD to PEP, and linear interpolation from PEP to FAP. The synthetic emotional facial expression is combined with the talking avatar's speech animation in a text-to-audio-visual-speech system. Perceptual evaluation shows that our approach can generate appropriate facial expressions for subtle and complex emotions defined by PAD, and thus enhances the emotional expressivity of the talking avatar.
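The two-level mapping described in the abstract can be pictured with a short sketch. The Python snippet below is an illustrative reconstruction, not the authors' implementation: the PEP count, the linear PAD-to-PEP matrix, and the per-PEP key FAP configurations are assumptions made purely for demonstration; only the 68 FAPs come from the MPEG-4 standard.

```python
# Minimal sketch of the layered mapping: emotion (PAD) -> expression (PEP) -> animation (FAP).
# All dimensions and coefficients here are illustrative assumptions, not the paper's trained values.

import numpy as np

N_PEP = 8    # assumed number of partial expression parameters (local face areas)
N_FAP = 68   # MPEG-4 defines 68 facial animation parameters

rng = np.random.default_rng(0)

# Hypothetical linear emotion-expression mapping: each PEP intensity is a weighted
# combination of pleasure (P), arousal (A), and dominance (D). In the paper this
# relationship is analyzed on a facial expression database; here it is random.
W_pad_to_pep = rng.uniform(-1.0, 1.0, size=(N_PEP, 3))

# For each PEP, two key FAP configurations: the neutral face (intensity 0) and the
# full expression pattern for that local area (intensity 1), in illustrative FAP units.
FAP_neutral = np.zeros((N_PEP, N_FAP))
FAP_extreme = rng.uniform(-400, 400, size=(N_PEP, N_FAP))


def pad_to_pep(pad):
    """Map a PAD point (each value in [-1, 1]) to PEP intensities in [0, 1]."""
    pep = W_pad_to_pep @ np.asarray(pad, dtype=float)
    return np.clip(pep, 0.0, 1.0)


def pep_to_fap(pep):
    """Linearly interpolate each local FAP pattern by its PEP intensity and sum the areas."""
    frames = FAP_neutral + pep[:, None] * (FAP_extreme - FAP_neutral)
    return frames.sum(axis=0)


# Example: a mildly pleasant, aroused, dominant state.
fap_vector = pep_to_fap(pad_to_pep([0.5, 0.4, 0.3]))
print(fap_vector.shape)  # (68,) -> one FAP frame for the MPEG-4 avatar
```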
Similar Works
Appraisal Inference from Synthetic Facial Expressions
Facial expression research largely relies on forced-choice paradigms that ask observers to choose a label to describe the emotion expressed, assuming a categorical encoding and decoding process. In contrast, appraisal theories of emotion suggest that cognitive appraisal of a situation and the resulting action tendencies determine facial actions in a complex cumulative and sequential process. It...
Facial Expression Synthesis Using PAD Emotional Parameters for a Chinese Expressive Avatar
Facial expression plays an important role in face to face communication in that it conveys nonverbal information and emotional intent beyond speech. In this paper, an approach for facial expression synthesis with an expressive Chinese talking avatar is proposed, where a layered parametric framework is designed to synthesize intermediate facial expressions using PAD emotional parameters [5], whi...
Photo-realistic expressive text to talking head synthesis
A controllable computer-animated avatar that could be used as a natural user interface for computers is demonstrated. Driven by text and emotion input, it generates expressive speech with corresponding facial movements. To create the avatar, HMM-based text-to-speech synthesis is combined with active appearance model (AAM)-based facial animation. The novelty is the degree of control achieved ove...
An Affective User Interface Based on Facial Expression Recognition and Eye-Gaze Tracking
This paper describes a pipeline by which the facial expression and eye gaze of the user are tracked, and 3D facial animation is then synthesized at the remote site based upon the timing of the facial and eye movements. The system first detects a facial area within the given image and then classifies its facial expression into 7 emotional weightings. Such weighting information, tr...
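As a rough illustration of how the 7 emotional weightings mentioned above could drive the synthesized 3D facial animation, the sketch below blends per-emotion expression targets by the classifier's weights. The emotion labels, mesh size, and blending scheme are assumptions for illustration and are not taken from the cited paper.

```python
# Hedged sketch: blend assumed per-emotion expression targets by classifier weights.
import numpy as np

EMOTIONS = ["neutral", "happiness", "sadness", "anger", "fear", "disgust", "surprise"]
N_VERTS = 300  # assumed size of the face-mesh displacement vector

rng = np.random.default_rng(1)
# Placeholder expression targets; a real system would use modeled blendshapes.
expression_targets = {e: rng.normal(0.0, 1.0, N_VERTS) for e in EMOTIONS}


def blend_expression(weights):
    """Blend per-emotion targets by normalized emotion weightings."""
    w = np.asarray([weights.get(e, 0.0) for e in EMOTIONS], dtype=float)
    if w.sum() > 0:
        w = w / w.sum()
    return sum(wi * expression_targets[e] for wi, e in zip(w, EMOTIONS))


# Example: a frame classified as mostly happy with a little surprise.
frame = blend_expression({"happiness": 0.7, "surprise": 0.2, "neutral": 0.1})
```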
An Integrated Approach to Emotion Recognition for Advanced Emotional Intelligence
Emotion identification is beginning to be considered an essential feature in human-computer interaction. However, most studies have focused mainly on facial expression classification and speech recognition, and until recently little attention has been paid to physiological pattern recognition. In this paper, an integrative approach to emotional interaction is proposed by fusing multi...